Hybrid (LRU) Page-Replacement Algorithm
Authors
Abstract
Similar Resources
VAR-PAGE-LRU A Buffer Replacement Algorithm Supporting Different Page Sizes
Non-standard applications (such as CAD/CAM etc.) require new concepts and implementation techniques at each layer of an appropriate database management system. The buffer manager, for example, should support either different page sizes, set-oriented operations on pages, or both in order to deal with large objects in an efficient way. However, implementing different page sizes causes some new bu...
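The abstract above is cut off, so the sketch below (Python, illustrative only) shows just the basic problem setting it describes: plain LRU eviction in a buffer pool whose pages have different sizes, where least-recently-used pages are evicted until the requested page fits. It is not the VAR-PAGE-LRU algorithm itself; the class and method names are hypothetical.

from collections import OrderedDict

class VariablePageLRUBuffer:
    """Toy LRU buffer pool whose pages may have different sizes.

    Evicts least-recently-used pages until the new page fits.
    Illustration of the problem setting only, not VAR-PAGE-LRU.
    """

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.pages = OrderedDict()  # page_id -> size, ordered oldest -> newest

    def access(self, page_id, size):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)  # mark as most recently used
            return "hit"
        # Evict LRU pages until the new page fits.
        while self.used + size > self.capacity and self.pages:
            victim, victim_size = self.pages.popitem(last=False)
            self.used -= victim_size
        self.pages[page_id] = size
        self.used += size
        return "miss"

# Example: a 16 KB pool serving 4 KB and 8 KB pages.
buf = VariablePageLRUBuffer(16 * 1024)
print(buf.access("A", 4096), buf.access("B", 8192), buf.access("A", 4096))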
LRU-SP: A Size-Adjusted and Popularity-Aware LRU Replacement Algorithm for Web Caching
This paper presents LRU-SP, a size-adjusted and popularity-aware extension to Least Recently Used (LRU) for caching web objects. The standard LRU, focusing on recently used and equal-sized objects, is not suitable for the web context because web objects vary dramatically in size and the recently accessed objects may differ from popular ones. LRU-SP is built on two LRU extensions, namel...
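As a rough sketch of the idea described above (not the published LRU-SP policy, whose exact rule is given in the paper), the following Python fragment layers a size-and-popularity weight on top of an ordinary LRU list; the cost function size / reference-count and the candidate window are assumptions made for illustration.

from collections import OrderedDict

class SizePopularityCache:
    """Toy web-object cache combining LRU recency with a size/popularity weight.

    Among the oldest `window` entries, eviction removes the object with the
    largest size / reference-count ratio. Illustration only; the real LRU-SP
    rule is specified in the paper.
    """

    def __init__(self, capacity_bytes, window=4):
        self.capacity = capacity_bytes
        self.window = window            # how many LRU-end entries to consider
        self.used = 0
        self.objects = OrderedDict()    # key -> (size, ref_count), oldest first

    def access(self, key, size):
        if key in self.objects:
            sz, refs = self.objects.pop(key)
            self.objects[key] = (sz, refs + 1)   # bump popularity, move to MRU end
            return "hit"
        while self.used + size > self.capacity and self.objects:
            # Consider the oldest `window` objects; evict the "heaviest" one.
            candidates = list(self.objects.items())[: self.window]
            victim = max(candidates, key=lambda kv: kv[1][0] / kv[1][1])[0]
            vsize, _ = self.objects.pop(victim)
            self.used -= vsize
        self.objects[key] = (size, 1)
        self.used += size
        return "miss"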
A Dueling Segmented LRU Replacement Algorithm with Adaptive Bypassing
In this paper we present a high performance cache replacement algorithm called Dueling Segmented LRU replacement algorithm with adaptive Bypassing (DSB). The base algorithm is Segmented LRU (SLRU) replacement algorithm originally proposed for disk cache management. We introduce three enhancements to the base SLRU algorithm. First, a newly allocated line could be randomly promoted for better pro...
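The base Segmented LRU structure referenced above is well known: new lines enter a probationary segment, and a hit there promotes the line to a protected segment. A minimal Python sketch of that base structure follows; the paper's DSB enhancements (random promotion, adaptive bypassing, set dueling between policies) are not modelled, and the segment sizes are arbitrary.

from collections import OrderedDict

class SegmentedLRU:
    """Base Segmented LRU: misses enter a probationary segment; a hit there
    promotes the line to a protected segment. Protected overflow is demoted
    back to the probationary segment rather than evicted outright.
    """

    def __init__(self, probation_size, protected_size):
        self.probation = OrderedDict()        # oldest first
        self.protected = OrderedDict()
        self.probation_size = probation_size
        self.protected_size = protected_size

    def access(self, key):
        if key in self.protected:
            self.protected.move_to_end(key)
            return "hit"
        if key in self.probation:
            del self.probation[key]
            self._insert_protected(key)       # promote on re-reference
            return "hit"
        # Miss: insert into the probationary segment, evicting its LRU line.
        if len(self.probation) >= self.probation_size:
            self.probation.popitem(last=False)
        self.probation[key] = True
        return "miss"

    def _insert_protected(self, key):
        if len(self.protected) >= self.protected_size:
            demoted, _ = self.protected.popitem(last=False)
            # Demote instead of evicting; may push out a probationary line.
            if len(self.probation) >= self.probation_size:
                self.probation.popitem(last=False)
            self.probation[demoted] = True
        self.protected[key] = True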
Token-ordered LRU: an effective page replacement policy and its implementation in Linux systems
Most computer systems use a global page replacement policy based on the LRU principle to approximately select a Least Recently Used page for a replacement in the entire user memory space. During execution interactions, a memory page can be marked as LRU even when its program is conducting page faults. We define the LRU pages under such a condition as false LRU pages because these LRU pages are ...
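The abstract is truncated before the policy itself, so the Python sketch below only illustrates the general token idea implied by the title: pages owned by the current token holder are skipped during LRU victim selection, so a process that is busy servicing page faults does not lose its resident pages as "false LRU" pages. The data layout and names are assumptions, not the paper's Linux implementation.

from collections import OrderedDict

class TokenAwareLRU:
    """Toy global LRU page list in which one process may hold a token.

    Victim selection skips pages owned by the token holder, so its working
    set is not reclaimed while it is paging. Illustration of the token
    idea only.
    """

    def __init__(self, frames):
        self.frames = frames
        self.pages = OrderedDict()   # (pid, page_no) -> True, oldest first
        self.token_holder = None     # pid currently exempt from reclaim

    def grant_token(self, pid):
        self.token_holder = pid

    def access(self, pid, page_no):
        key = (pid, page_no)
        if key in self.pages:
            self.pages.move_to_end(key)          # most recently used
            return "hit"
        if len(self.pages) >= self.frames:
            self._evict_one()
        self.pages[key] = True
        return "miss"

    def _evict_one(self):
        # Oldest page not owned by the token holder; plain LRU as a fallback.
        victim = next((k for k in self.pages if k[0] != self.token_holder), None)
        if victim is None:
            self.pages.popitem(last=False)
        else:
            del self.pages[victim]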
CRC: Protected LRU Algorithm
Additional on-chip transistors as well as more aggressive processors have led the way for an ever-expanding memory hierarchy. Multi-core architectures often employ a shared L3 cache to reduce accesses to off-chip memory. Such memory structures often incur long latency (as much as 30 cycles in our framework) and are configured with sets as large as 16 ways. A baseline replacement ...
Journal
Journal title: International Journal of Computer Applications
Year: 2014
ISSN: 0975-8887
DOI: 10.5120/15966-5350